Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for even moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear--for example, Integrated Gradients and SHAP--can provably fail to improve on random guessing for inferring model behaviour. Our results apply to common end-tasks such as identifying local model behaviour, spurious feature identification, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks. In particular, we show that once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
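The contrast the abstract draws, complete-and-linear attribution versus direct repeated model evaluation, can be made concrete in a small sketch. The toy model, zero baseline, and the "reset feature i to the baseline" end-task below are illustrative assumptions rather than the paper's setup; the point is only that once the end-task is stated, a couple of model calls per feature answer it exactly, while the attribution has to be read off indirectly.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small fixed nonlinear "black box" standing in for a trained network
# (an illustrative assumption; any callable f: R^d -> R would do).
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=8)
def f(x):
    return np.maximum(W1 @ x, 0.0) @ W2

def integrated_gradients(f, x, baseline, steps=64):
    """A complete-and-linear attribution (Riemann-sum approximation of IG),
    using finite-difference gradients so f stays a black box."""
    d, eps = len(x), 1e-4
    avg_grad = np.zeros(d)
    for alpha in (np.arange(steps) + 0.5) / steps:
        z = baseline + alpha * (x - baseline)
        avg_grad += np.array([(f(z + eps * np.eye(d)[i]) - f(z - eps * np.eye(d)[i])) / (2 * eps)
                              for i in range(d)]) / steps
    return (x - baseline) * avg_grad

def ablation_effects(f, x, baseline):
    """The 'repeated model evaluation' route: for the end-task 'how much does the
    output move if feature i is reset to the baseline?', just ask the model."""
    out = np.zeros(len(x))
    for i in range(len(x)):
        x_abl = x.copy()
        x_abl[i] = baseline[i]
        out[i] = f(x) - f(x_abl)
    return out

x, baseline = rng.normal(size=4), np.zeros(4)
ig = integrated_gradients(f, x, baseline)
print("IG attributions :", np.round(ig, 3))
print("ablation effects:", np.round(ablation_effects(f, x, baseline), 3))
print("completeness    : sum(IG) =", round(float(ig.sum()), 3),
      "vs f(x) - f(baseline) =", round(float(f(x) - f(baseline)), 3))
```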
Explainability has become a central requirement for the development, deployment, and adoption of machine learning (ML) models, and we have yet to understand what explanation methods can and cannot do. Several factors, such as the data, the model prediction, the hyperparameters used in training the model, and random initialization, can all influence downstream explanations. While previous work empirically hinted that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive studies quantifying this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we measure the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors, i.e., the hyperparameters and inputs used to generate saliency-based Es or Ys. We discover that Y's relative direct influence on E follows an odd pattern: the influence is higher in the lowest-performing models than in mid-performing models, and it then decreases in the top-performing models. We believe our work is a promising first step towards providing better guidance for practitioners, who can make more informed decisions about utilizing these explanations by knowing what factors are at play and how they relate to their end task.
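A much-simplified version of the intervention loop, training models that differ only in one hyperparameter and asking how much the predictions Y and the saliency-style explanations E move, can be sketched as below. The tiny logistic model, the L2 hyperparameter, and the difference-to-reference measure are stand-ins (for a linear model the input-gradient "saliency" collapses to the weight vector), not the paper's causal estimator or deep-model setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data as a stand-in for the paper's setup (illustrative assumption).
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(float)

def train_logreg(X, y, l2, lr=0.1, epochs=200, seed=0):
    """Tiny logistic regression; the hyperparameter we 'intervene' on is l2."""
    w = np.random.default_rng(seed).normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

def predictions(w, X):   # Y: the model's predicted probabilities
    return 1 / (1 + np.exp(-X @ w))

def saliency(w, X):      # E: input-gradient saliency of the logit (constant = w here)
    return np.tile(w, (len(X), 1))

# Intervene on the l2 hyperparameter and measure how much Y and E move
# relative to a reference model, a crude stand-in for a treatment effect.
w_ref = train_logreg(X, y, l2=0.0)
for l2 in [0.01, 0.1, 1.0]:
    w_t = train_logreg(X, y, l2=l2)
    dY = np.mean(np.abs(predictions(w_t, X) - predictions(w_ref, X)))
    dE = np.mean(np.abs(saliency(w_t, X) - saliency(w_ref, X)))
    print(f"l2={l2:4}:  shift in Y = {dY:.3f}   shift in E = {dE:.3f}")
```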
We investigate whether three types of post hoc model explanations--feature attribution, concept activation, and training point ranking--are effective for detecting a model's reliance on spurious signals in the training data. Specifically, we consider the scenario where the spurious signal to be detected is unknown, at test-time, to the user of the explanation method. We design an empirical methodology that uses semi-synthetic datasets along with pre-specified spurious artifacts to obtain models that verifiably rely on these spurious training signals. We then provide a suite of metrics that assess an explanation method's reliability for spurious signal detection under various conditions. We find that the post hoc explanation methods tested are ineffective when the spurious artifact is unknown at test-time, especially for non-visible artifacts like a background blur. Further, we find that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals even when the model being explained does not rely on spurious artifacts. This finding casts doubt on the utility of these approaches, in the hands of a practitioner, for detecting a model's reliance on spurious signals.
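The semi-synthetic recipe, stamp a known artifact into the training data, verify the model relies on it, then ask whether an explanation points at it, looks roughly like the following sketch. The 8x8 "images", the corner-patch artifact, and the use of absolute coefficients as a stand-in attribution are all assumptions for illustration, not the datasets or explanation methods evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Semi-synthetic setup (illustrative assumption): 8x8 "images" whose true signal
# lives in the centre, plus a spurious 2x2 corner patch stamped onto class 1 only.
def make_data(n, spurious=True):
    X = rng.normal(size=(n, 8, 8))
    y = rng.integers(0, 2, size=n)
    X[np.arange(n), 3, 3] += 2.0 * (y - 0.5)   # true signal pixel
    if spurious:
        X[y == 1, :2, :2] = 3.0                # spurious artifact
    return X.reshape(n, -1), y

X_train, y_train = make_data(2000, spurious=True)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Simple attribution stand-in: |coefficient| per pixel (not one of the paper's methods).
attr = np.abs(clf.coef_).reshape(8, 8)
artifact_mask = np.zeros((8, 8), dtype=bool)
artifact_mask[:2, :2] = True

# Metric in the spirit of the paper: fraction of attribution mass on the known artifact,
# plus a check that the model's accuracy degrades once the artifact is removed.
print("attribution mass on artifact region:",
      round(float(attr[artifact_mask].sum() / attr.sum()), 3))
print("accuracy on artifact-free data:",
      round(clf.score(*make_data(2000, spurious=False)), 3))
```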
Each year, expert-level performance is attained in increasingly complex multiagent domains, with notable examples including Go, poker, and StarCraft II. This rapid progress has been accompanied by a corresponding need to better understand how such agents achieve this performance, in order to enable their safe deployment, identify their limitations, and reveal their potential for improvement. In this paper we take a step back from performance-centric multiagent learning and instead turn our attention to agent behaviour analysis. We introduce a model-agnostic method for discovering behaviour clusters in multiagent domains, using variational inference to learn a hierarchy of behaviours at the joint and local agent levels. Our framework makes no assumptions about the agents' underlying learning algorithms, does not require access to their latent states or models, and can be trained using entirely offline observational data. We illustrate the effectiveness of our method for a coupled understanding of behaviours at the joint and local agent levels, for detecting behavioural changepoints throughout training, and for discovering core behavioural concepts (e.g., those that facilitate higher returns), and we demonstrate the approach's scalability to a high-dimensional multiagent MuJoCo control domain.
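As a rough illustration of discovering behaviour clusters from purely offline observation data, the sketch below featurises synthetic trajectories and fits a Gaussian mixture. This is a deliberately crude stand-in for the paper's hierarchical variational approach over joint and per-agent behaviour; only the idea of clustering behaviours offline, with no access to the agents' models or latent states, carries over.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Offline observation data (illustrative assumption): trajectories of 2D positions,
# generated here from three synthetic "behaviours".
def synth_trajectory(kind, T=50):
    if kind == 0:
        step = rng.normal([0.5, 0.0], 0.1, size=(T, 2))   # move right
    elif kind == 1:
        step = rng.normal([0.0, 0.5], 0.1, size=(T, 2))   # move up
    else:
        step = rng.normal([0.0, 0.0], 0.4, size=(T, 2))   # jitter in place
    return np.cumsum(step, axis=0)

trajs = [synth_trajectory(k) for k in rng.integers(0, 3, size=300)]

# Summary features per trajectory: net displacement and per-dimension step variance.
feats = np.array([np.concatenate([t[-1] - t[0], t.std(axis=0)]) for t in trajs])

# Cluster behaviours. A Gaussian mixture over hand-made features only conveys the
# "discover behaviour clusters from offline data" idea, not the paper's hierarchy.
gm = GaussianMixture(n_components=3, random_state=0).fit(feats)
print("cluster sizes:", np.bincount(gm.predict(feats)))
```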
Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision-making phases of a task. A key challenge, however, is that IDS systems are not perfect: in complex real-world scenarios they may produce incorrect output or fail to work entirely. The field of explainable AI planning (XAIP) seeks to develop techniques that make the decision-making of sequential decision-making AI systems more explainable to end users. Critically, prior work applying XAIP techniques to IDS systems has assumed that the plan produced by the planner is always optimal, and therefore that the action or plan recommended to the user as decision support is always correct. In this work, we study novice users interacting with a non-robust IDS system, one that occasionally recommends the wrong action and that may become unavailable after users have grown accustomed to its guidance. We introduce a novel explanation type, subgoal-based explanations, for planning-based IDS systems, which supplement traditional IDS output with information about the subgoal toward which the recommended action contributes. We demonstrate that subgoal-based explanations lead to improved user task performance, improve users' ability to distinguish optimal from suboptimal IDS recommendations, are preferred by users, and enable more robust user performance in the case of IDS failure.
Machine learning has been successfully applied to systems applications such as memory prefetching and caching, where learned models have been shown to outperform heuristics. However, the lack of understanding of the inner workings of these models, their interpretability, remains a major obstacle to adoption in real-world deployments. Understanding a model's behaviour can help system administrators and developers gain confidence in the model, understand risks in production, and debug unexpected behaviour. Interpretability of models used in computer systems poses a particular challenge: unlike ML models trained on images or text, the input domain (e.g., memory access patterns, program counters) is not immediately interpretable. A major challenge is therefore to explain the model in terms of concepts that are meaningful to human practitioners. By analyzing a state-of-the-art caching model, we provide evidence that the model has learned concepts beyond simple statistics that can be leveraged for explanations. Our work provides a first step towards the interpretability of ML models for systems and highlights both the promise and the challenges of this emerging research area.
What is being learned by superhuman neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be limited, ultimately constraining what we can achieve with neural network interpretability. In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts, we show when and where these concepts are represented in the AlphaZero network. We also provide a behavioural analysis focused on opening play, including a qualitative analysis from chess grandmaster Vladimir Kramnik. Finally, we carry out a preliminary investigation looking at the low-level details of AlphaZero's representations, and make the resulting behavioural and representational analyses available online.
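The probing methodology, asking where (which layer) and when a human concept becomes linearly decodable from the network's activations, can be sketched with a per-layer linear probe. The "activations" and concept labels below are synthetic placeholders, not AlphaZero data; in the actual study the probes are run on AlphaZero's internal activations over chess positions and training checkpoints.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Stand-in for network activations: in the real study these would be AlphaZero's
# activations on chess positions, and the label a human concept (e.g., a material
# advantage for the side to move). Everything below is synthetic.
n_positions, d = 2000, 64
concept = rng.integers(0, 2, size=n_positions)

def layer_activations(depth):
    """Synthetic layers in which the concept becomes more linearly decodable
    with depth, mimicking the 'when and where' question of concept probing."""
    signal = concept[:, None] * rng.normal(0.3 * depth, 0.1, size=(n_positions, d))
    return signal + rng.normal(size=(n_positions, d))

for depth in range(1, 5):
    A = layer_activations(depth)
    A_tr, A_te, c_tr, c_te = train_test_split(A, concept, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(A_tr, c_tr)
    print(f"layer {depth}: probe accuracy = {probe.score(A_te, c_te):.3f}")
```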
Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance, solely, on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor model. Theory in the case of a linear model and a single-layer convolutional neural network supports our experimental findings. All code to replicate our findings will be available at https://goo.gl/hBmhDt. (Here, saliency methods refers to the broad category of visualization and attribution methods aimed at interpreting trained models, often applied to deep neural networks, particularly on image data.)
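One of the proposed tests, the model-parameter randomization test, is easy to sketch: compare an explanation computed on the trained model with the same explanation computed after the weights are re-initialized. The toy network and vanilla gradient saliency below are assumptions standing in for the models and methods examined in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# A tiny one-hidden-layer network standing in for a trained image classifier
# (an illustrative assumption; the paper runs its tests on real trained models).
d, h = 32, 16
params_trained = {"W1": rng.normal(size=(h, d)), "W2": rng.normal(size=h)}

def vanilla_saliency(params, x):
    """Gradient of the scalar output W2 . relu(W1 x) with respect to the input."""
    relu_grad = (params["W1"] @ x > 0).astype(float)
    return params["W1"].T @ (relu_grad * params["W2"])

def randomized(params, seed=6):
    """Model-parameter randomization test: reinitialize all weights."""
    g = np.random.default_rng(seed)
    return {k: g.normal(size=v.shape) for k, v in params.items()}

x = rng.normal(size=d)
s_trained = np.abs(vanilla_saliency(params_trained, x))
s_random = np.abs(vanilla_saliency(randomized(params_trained), x))

# If an explanation barely changes when the model's weights are destroyed, it
# cannot be explaining what the model learned; a high similarity here is
# evidence that the method fails the sanity check.
similarity = np.corrcoef(s_trained, s_random)[0, 1]
print(f"saliency similarity before/after weight randomization: {similarity:.3f}")
```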
We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks. Our results across several large-scale image classification datasets show that many popular interpretability methods produce estimates of feature importance that are not better than a random designation of feature importance. Only certain ensemble-based approaches, VarGrad and SmoothGrad-Squared, outperform such a random assignment of importance. The manner of ensembling remains critical: we show that some approaches do no better than the underlying method but carry a far higher computational burden.
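The evaluation the abstract describes, checking whether an importance estimate beats a random designation of importance, is in the spirit of a remove-and-retrain loop: delete the features ranked most important, retrain, and compare the accuracy drop against deleting randomly ranked features. The synthetic data and the absolute-coefficient importance estimate below are assumptions; the paper's experiments use large image datasets and saliency-style estimators such as SmoothGrad-Squared and VarGrad.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic classification data with a few informative features
# (illustrative assumption; the paper evaluates on large image datasets).
n, d, k_informative = 3000, 40, 5
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:k_informative] = 2.0
y = (X @ w_true + rng.normal(size=n) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def retrain_after_removal(important, frac=0.25):
    """Remove-and-retrain style check: blank out the top-'frac' features ranked
    by 'important', retrain, and report test accuracy. A good importance
    estimate should hurt accuracy more than a random ranking does."""
    k = int(frac * d)
    drop = np.argsort(-important)[:k]
    X_tr_m, X_te_m = X_tr.copy(), X_te.copy()
    X_tr_m[:, drop] = 0.0
    X_te_m[:, drop] = 0.0
    clf = LogisticRegression(max_iter=2000).fit(X_tr_m, y_tr)
    return clf.score(X_te_m, y_te)

base = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
estimate = np.abs(base.coef_).ravel()            # a simple importance estimate
random_rank = rng.permutation(d).astype(float)   # the random-designation baseline

print("accuracy after removing 'important' features:", round(retrain_after_removal(estimate), 3))
print("accuracy after removing random features     :", round(retrain_after_removal(random_rank), 3))
```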
The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state. In addition, many systems, such as image classifiers, operate on low-level features rather than high-level concepts. To address these challenges, we introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net's internal state in terms of human-friendly concepts. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result; for example, how sensitive a prediction of zebra is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
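The TCAV recipe has three steps: learn a concept activation vector as the normal of a linear classifier separating concept activations from random activations, take the directional derivative of the class logit along that vector, and report the fraction of class inputs with positive sensitivity. The sketch below runs those steps on synthetic activations and a toy logit head, which are assumptions in place of a real network layer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
d = 20

# Synthetic stand-ins: 'activations' at some layer for concept examples
# (e.g. striped textures), random counterexamples, and inputs of the target
# class (e.g. zebra images). In TCAV these come from a real network's layer.
concept_dir = rng.normal(size=d)
concept_dir /= np.linalg.norm(concept_dir)
acts_concept = rng.normal(size=(100, d)) + 2.0 * concept_dir
acts_random = rng.normal(size=(100, d))
acts_class = rng.normal(size=(200, d)) + 0.5 * concept_dir

# Step 1: the CAV is the normal of a linear classifier separating concept
# activations from random activations.
A = np.vstack([acts_concept, acts_random])
labels = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression(max_iter=1000).fit(A, labels).coef_.ravel()
cav /= np.linalg.norm(cav)

# Step 2: directional derivative of the class logit along the CAV. The logit
# head below is a fixed toy function of the activations (an assumption).
W1, w2 = rng.normal(size=(16, d)), rng.normal(size=16)
def logit_grad(a):
    return W1.T @ ((W1 @ a > 0).astype(float) * w2)

sensitivities = np.array([logit_grad(a) @ cav for a in acts_class])

# Step 3: the TCAV score is the fraction of class inputs whose logit increases
# when moving in the concept direction.
print("TCAV score:", round(float((sensitivities > 0).mean()), 3))
```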